
    Generation of Paths in a Maze using a Deep Network without Learning

    Trajectory- or path-planning is a fundamental issue in a wide variety of applications. Here we show that it is possible to solve path planning for multiple start- and end-points highly efficiently with a network that consists only of max-pooling layers, for which no network training is needed. Unlike competing approaches, very large mazes containing more than half a billion nodes, with dense obstacle configurations and several thousand path end-points, can be solved this way in a very short time on parallel hardware.
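
    The abstract does not spell out the mechanism, but pooling-based planning can be pictured as a wavefront: a value placed at the goal spreads through free cells by repeated max pooling, and greedy ascent on the converged field recovers a path. The sketch below illustrates this idea with scipy's maximum_filter standing in for a single pooling layer; the grid encoding, discount factor, and greedy readout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def plan_path(free, start, goal, gamma=0.99):
    """Wavefront planning via repeated max pooling: a discounted value
    field spreads from `goal` through passable cells; the path is then
    read out by greedy ascent from `start`. `free` is a boolean grid
    where True marks passable cells."""
    value = np.zeros(free.shape)
    value[goal] = 1.0
    while True:
        new = np.maximum(value, gamma * maximum_filter(value, size=3))
        new[~free] = 0.0                # obstacles block propagation
        new[goal] = 1.0
        if np.array_equal(new, value):  # field has converged
            break
        value = new
    path, pos = [start], start
    while pos != goal and value[pos] > 0:
        r, c = pos
        nbrs = [(r + dr, c + dc)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc)
                and 0 <= r + dr < free.shape[0]
                and 0 <= c + dc < free.shape[1]]
        pos = max(nbrs, key=lambda p: value[p])  # step uphill
        path.append(pos)
    return path

# Toy maze: a wall across row 2 with a gap in the last column.
free = np.ones((5, 5), dtype=bool)
free[2, :4] = False
print(plan_path(free, start=(0, 0), goal=(4, 0)))
```

    Because each pooling pass is a local, uniform operation, the field update parallelizes trivially, which is consistent with the paper's claim of handling very large mazes on parallel hardware.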

    Hippocampal CA1 place cells encode intended destination on a maze with multiple choice points

    The hippocampus encodes both spatial and nonspatial aspects of a rat's ongoing behavior at the single-cell level. In this study, we examined the encoding of intended destination by hippocampal (CA1) place cells during performance of a serial reversal task on a double Y-maze. On the maze, rats had to make two choices to access one of four possible goal locations, two of which contained reward. Reward locations were kept constant within blocks of 10 trials but changed between blocks, and each day's session comprised three or more trial blocks. A disproportionate number of place fields were observed in the start box and the beginning stem of the maze, relative to other locations on the maze. Forty-six percent of these place fields had different firing rates on journeys to different goal boxes. Another group of cells had place fields before the second choice point, and, of these, 44% differentiated between journeys to specific goal boxes. In a second experiment, we observed that rats with hippocampal damage made significantly more errors than control rats on the Y-maze when reward locations were reversed. Together, these results suggest that, at the start of the maze, the hippocampus encodes both the rat's current location and its intended destination, and that this encoding is necessary for a flexible response to changes in reinforcement contingencies.

    Multi Sentence Description of Complex Manipulation Action Videos

    Automatic video description requires the generation of natural language statements about the actions, events, and objects in a video. An important human trait is that, when we describe a video, we can do so with variable levels of detail. In contrast, existing approaches for automatic video description mostly focus on single-sentence generation at a fixed level of detail. Here we instead address the description of manipulation action videos, where different levels of detail are required to convey the hierarchical structure of these actions, which is also relevant for modern approaches to robot learning. We propose one hybrid statistical framework and one end-to-end framework to address this problem. The hybrid method needs much less training data because it statistically models uncertainties within the video clips, whereas the end-to-end method, which is more data-hungry, directly connects the visual encoder to the language decoder without any intermediate (statistical) processing step. Both frameworks use LSTM stacks to allow for different levels of description granularity, so videos can be described by simple single sentences or complex multi-sentence descriptions. Quantitative results demonstrate that these methods produce more realistic descriptions than competing approaches.
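
    As a concrete illustration of the end-to-end variant, the sketch below connects a visual encoder to an LSTM language decoder in PyTorch. All sizes (feature dimension, hidden units, vocabulary) and the single-layer topology are assumptions chosen for illustration; the paper's actual architecture, LSTM stack depth, and training setup are not reproduced here.

```python
import torch
import torch.nn as nn

class VideoCaptioner(nn.Module):
    """Minimal encoder-decoder sketch: one LSTM summarizes a sequence of
    per-frame CNN features, and a second LSTM decodes a sentence from
    that summary. Dimensions are illustrative, not the paper's values."""
    def __init__(self, feat_dim=2048, hidden=512, vocab=10000, embed=256):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab, embed)
        self.decoder = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, frame_feats, captions):
        # frame_feats: (B, T, feat_dim) precomputed visual features
        # captions:    (B, L) token ids of the target sentence
        _, state = self.encoder(frame_feats)  # clip summary as LSTM state
        dec_out, _ = self.decoder(self.embed(captions), state)
        return self.out(dec_out)              # (B, L, vocab) logits

model = VideoCaptioner()
logits = model(torch.randn(2, 16, 2048), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```

    Stacking more LSTM layers, or decoding several sentences in sequence, would be the natural extension toward the multi-sentence, variable-granularity descriptions the paper targets.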

    Action Prediction in Humans and Robots

    Efficient action prediction is of central importance for fluent workflows between humans, and equally so for human-robot interaction. To achieve prediction, actions can be encoded by a series of events, where every event corresponds to a change in a (static or dynamic) relation between some of the objects in a scene. Manipulation actions, among others, can be uniquely encoded this way, and on average less than 60% of an action's time series has to pass before it can be predicted. Using a virtual reality setup and testing ten different manipulation actions, here we show that in most cases humans predict actions at the same event as the algorithm. In addition, we perform an in-depth analysis of the temporal gain resulting from such predictions when chaining actions, and we show in robotic experiments that the percentage gain for humans and robots is approximately equal. Thus, if robots use this algorithm, their prediction moments will be compatible with those of their human interaction partners, which should greatly benefit natural human-robot collaboration.
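
    The prediction scheme described above can be made concrete with a simple prefix-matching sketch: an action is a sequence of relation-change events, and a prediction fires as soon as the observed prefix is consistent with exactly one known action. The event names and the two-action library below are hypothetical, chosen only to illustrate the idea.

```python
def predict_action(observed_events, action_library):
    """Return the action name once the observed prefix of relation-change
    events matches exactly one entry in the library, else None."""
    candidates = [name for name, events in action_library.items()
                  if events[:len(observed_events)] == observed_events]
    return candidates[0] if len(candidates) == 1 else None

# Hypothetical event sequences for two manipulation actions.
library = {
    "pick_and_place": ["hand_touches_object", "object_lifts_off_table",
                       "object_touches_table", "hand_releases_object"],
    "pushing":        ["hand_touches_object", "object_slides_on_table",
                       "hand_releases_object"],
}

# After the first event both actions are still possible (returns None);
# after the second event the prefix is unique, so the prediction fires
# well before the action is complete.
print(predict_action(["hand_touches_object"], library))
print(predict_action(["hand_touches_object", "object_lifts_off_table"], library))
```

    This mirrors the abstract's observation that an action can typically be identified after less than 60% of its time series has elapsed: the distinguishing event usually occurs well before the action ends.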

    Differential Hebbian learning with time-continuous signals for active noise reduction

    Spike timing-dependent plasticity, related to differential Hebb rules, has become a leading paradigm in neuronal learning, because weights can grow or shrink depending on the timing of pre- and post-synaptic signals. Here we use this paradigm to reduce unwanted (acoustic) noise. Our system relies on heterosynaptic differential Hebbian learning, and we show that it can efficiently suppress noise by up to -140 dB in multi-microphone setups under various conditions. The system learns quickly, most often within a few seconds, and it is also robust with respect to different geometrical microphone configurations. Hence, this theoretical study demonstrates that differential Hebbian learning, derived from the neurosciences, can be successfully transferred into a technical domain.
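
    The core update rule can be sketched compactly: a weight changes in proportion to the product of its (pre-synaptic) input and the temporal derivative of the output, so learning stops once the output no longer co-varies with the reference input. The single-tap, two-microphone setup below is a toy assumption; the paper's multi-microphone configurations and filter structure are not reproduced.

```python
import numpy as np

def dhl_noise_cancel(primary, reference, mu=1e-4):
    """Toy differential Hebbian noise canceller: `primary` is the signal
    microphone (signal + noise), `reference` a noise-only microphone.
    The update w += mu * x * dv/dt is the differential Hebb rule; the
    single tap and fixed learning rate are simplifying assumptions."""
    w, prev = 0.0, 0.0
    out = np.zeros_like(primary, dtype=float)
    for t in range(len(primary)):
        out[t] = primary[t] - w * reference[t]  # noise-reduced output
        dv = out[t] - prev                      # discrete output derivative
        w += mu * reference[t] * dv             # differential Hebbian update
        prev = out[t]
    return out, w
```

    In the paper's setting the inputs are time-continuous and suitably filtered, which supplies the phase relationships the derivative-based rule needs; the discrete loop above only shows the shape of the update.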